Logarithmic and Riesz Equilibrium for Multiple Sources on the Sphere --- the Exceptional Case
We consider the minimal discrete and continuous energy problems on the unit
sphere $\mathbb{S}^d$ in the Euclidean space $\mathbb{R}^{d+1}$ in the presence
of an external field due to finitely many localized charge distributions on
$\mathbb{S}^d$, where the energy arises from the Riesz potential $1/r^s$ ($r$
is the Euclidean distance) for the critical Riesz parameter $s = d - 2$ if
$d \geq 3$ and the logarithmic potential if $d = 2$. Individually, a localized
charge distribution is either a point charge or assumed to be rotationally
symmetric. The extremal measure solving the continuous external field problem
for weak fields is shown to be the uniform measure on the sphere restricted to
the exterior of spherical caps surrounding the localized charge distributions;
the radii are determined by the relative strengths of the generating charges.
Furthermore, we show that the minimal energy points solving the related
discrete external field problem are confined to this support. For point
sources on the sphere, we show that the equilibrium measure has support in the
complement of the union of specified spherical caps about the sources.
Numerical examples are provided to illustrate our results.
Comment: 23 pages, 4 figures
Direct and Inverse Results on Bounded Domains for Meshless Methods via Localized Bases on Manifolds
This article develops direct and inverse estimates for certain finite
dimensional spaces arising in kernel approximation. Both the direct and inverse
estimates are based on approximation spaces spanned by local Lagrange functions
which are spatially highly localized. The construction of such functions is
computationally efficient and generalizes the construction given by the authors
for restricted surface splines on the sphere. The kernels for which the theory
applies include the Sobolev-Matérn kernels for closed, compact, connected
Riemannian manifolds.
Comment: 29 pages. To appear in Festschrift for the 80th Birthday of Ian Sloan
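The flavor of kernel-based Lagrange bases can be sketched in a toy setting: nodes on the circle (a simple closed manifold) with a Gaussian kernel standing in for a Sobolev-Matérn kernel. Both the kernel and its scale are assumptions for illustration, and this sketch builds full Lagrange functions by a global solve rather than the authors' computationally efficient local construction.

```python
import numpy as np

# n equispaced nodes on the unit circle
n = 40
theta = 2 * np.pi * np.arange(n) / n
nodes = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def kernel(x, y):
    """Gaussian kernel on chordal distance (stand-in for a Sobolev-Matérn kernel)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-50.0 * d2)

K = kernel(nodes, nodes)              # kernel collocation matrix
C = np.linalg.solve(K, np.eye(n))     # column j: coefficients of the j-th Lagrange function

def lagrange(x):
    """Evaluate all n Lagrange functions at the points x."""
    return kernel(x, nodes) @ C

L = lagrange(nodes)                   # cardinality: L equals the identity at the nodes
```

The coefficient matrix `C` decays rapidly away from its diagonal, which is the spatial localization that makes truncated, locally constructed Lagrange functions viable.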
Splines and Wavelets on Geophysically Relevant Manifolds
Analysis on the unit sphere has found many applications in
seismology, weather prediction, astrophysics, signal analysis, crystallography,
computer vision, computerized tomography, neuroscience, and statistics.
In the last two decades, the importance of these and other applications
triggered the development of various tools, such as splines and wavelet bases,
suitable for the unit spheres and the rotation group SO(3). The present paper
is a summary of some of the results of the author and his collaborators on
generalized (average) variational splines and localized frames (wavelets) on
compact Riemannian manifolds. The results are illustrated by applications to
Radon-type transforms on the sphere and the rotation group SO(3).
Comment: The final publication is available at http://www.springerlink.co
The impact of Stieltjes' work on continued fractions and orthogonal polynomials
Stieltjes' work on continued fractions and the orthogonal polynomials related
to continued fraction expansions is summarized, and an attempt is made to
describe the influence of Stieltjes' ideas and work on research done after his
death, with an emphasis on the theory of orthogonal polynomials.
Deep vs. shallow networks: An approximation theory perspective
© 2016 World Scientific Publishing Company. The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.
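As a minimal, self-contained illustration of the ReLU function's role in approximation (an elementary identity, not a result from the paper): a shallow network with just two ReLU units represents the absolute-value function exactly.

```python
import numpy as np

def relu(t):
    """The ReLU activation: max(t, 0)."""
    return np.maximum(t, 0.0)

# A one-hidden-layer network with two ReLU units: |x| = relu(x) + relu(-x).
x = np.linspace(-1.0, 1.0, 201)
net = relu(x) + relu(-x)
```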
An analysis of training and generalization errors in shallow and deep networks
© 2019 Elsevier Ltd. This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test data.
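The over-parametrized regime described above can be sketched numerically. The toy data, the choice of periodic units $\cos(kx)$, and the plain minimum-norm least-squares fit below are illustrative assumptions, not the paper's estimator; the sketch only shows that more parameters than samples permit a perfect fit of noisy training data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 101                                      # 20 samples, 101 parameters: over-parametrized
x = rng.uniform(0.0, 2.0 * np.pi, n)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(n)  # noisy periodic target (toy data)

A = np.cos(np.outer(x, np.arange(m)))               # units with periodic activation cos(kx)
c = np.linalg.pinv(A) @ y                           # minimum-norm coefficients
train_err = np.abs(A @ c - y).max()                 # training error vanishes despite the noise
```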